TP + FSDP distributed training (full finetuning) #2330
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2330
Note: Links to docs will display an error until the docs builds have been completed.
✅ No failures as of commit 5ab0933 with merge base fb52557. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #2330       +/-   ##
==========================================
+ Coverage    8.63%   65.62%   +56.98%
==========================================
  Files         311      358       +47
  Lines       18848    21393     +2545
==========================================
+ Hits         1628    14039    +12411
+ Misses      17220     7354     -9866

☔ View full report in Codecov by Sentry.
Looks pretty good - Can I see some tests + output though?
torchtune/training/_distributed.py
fsdp_kwargs = {"reshard_after_forward": reshard_after_forward} | ||
if cpu_offload: | ||
fsdp_kwargs["offload_policy"] = CPUOffloadPolicy() | ||
if dp_mesh: |
I actually think we can just do fsdp_kwargs["mesh"] = dp_mesh
since a None will just get passed through and the sharding function can handle that itself.
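A minimal sketch of that suggestion, assuming the FSDP2-style `fully_shard` API; the helper name `build_fsdp_kwargs` and its arguments are illustrative, not the file's actual code:

```python
from torch.distributed.fsdp import CPUOffloadPolicy  # torch >= 2.6; older releases: torch.distributed._composable.fsdp


def build_fsdp_kwargs(reshard_after_forward: bool, cpu_offload: bool, dp_mesh=None) -> dict:
    """Hypothetical helper mirroring the hunk above, but always forwarding the mesh."""
    fsdp_kwargs = {"reshard_after_forward": reshard_after_forward}
    if cpu_offload:
        fsdp_kwargs["offload_policy"] = CPUOffloadPolicy()
    # A None mesh is simply passed through; per the suggestion above, the sharding
    # function handles that case itself and falls back to its default mesh.
    fsdp_kwargs["mesh"] = dp_mesh
    return fsdp_kwargs
```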
recipes/full_finetune_distributed.py
utils.log_rank_zero(
    log,
-   "FSDP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
+   "FSDP and TP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
"Distributed training is enabled. Instantiating model etc etc..."
recipes/full_finetune_distributed.py
# Apply tensor parallelism to the model
if self.tensor_parallel_dim > 1:
    if self.parallelize_plan is None:
I would move this error up to init. Would be a shame to get all this way and then fail.
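A minimal sketch of that suggestion; the helper name and its placement are illustrative, not the recipe's actual code:

```python
def _validate_tp_config(tensor_parallel_dim: int, parallelize_plan) -> None:
    """Fail fast (e.g. in the recipe's __init__) instead of after model setup."""
    if tensor_parallel_dim > 1 and parallelize_plan is None:
        raise ValueError(
            "Parallelism plan needs to be provided when tensor parallel is enabled."
        )
```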
recipes/full_finetune_distributed.py
raise ValueError(
    "Parallelism plan need to be provided when tensor parallel is enabled."
)
tp_mesh = self.device_mesh["tp"]
nit: just use this the same way you do with dp, e.g. don't create a new variable for tp_mesh; just pass it into the function.
recipes/full_finetune_distributed.py
if self._compile:
    training.compile_model(model, verbose=self._is_rank_zero)

self.device_mesh = dist.init_device_mesh(
I would rather explicitly save what we need here. So instead of putting the whole device_mesh on the object: later on, if data_parallel_dim > 1, save self.local_size = dp_mesh.size() and self.local_rank = dp_mesh.get_local_rank(); else save self.local_size = self.world_size and self.local_rank = self.rank.
Does that make sense?
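A sketch of what that could look like, using the reviewer's attribute names; the helper itself and the "dp" mesh dim name are assumptions, not the recipe's actual code:

```python
from torch.distributed.device_mesh import DeviceMesh


def local_size_and_rank(
    device_mesh: DeviceMesh, data_parallel_dim: int, world_size: int, rank: int
) -> tuple[int, int]:
    """Return (local_size, local_rank) for the data-parallel group, falling back
    to the global world size/rank when there is no data-parallel dimension."""
    if data_parallel_dim > 1:
        dp_mesh = device_mesh["dp"]
        return dp_mesh.size(), dp_mesh.get_local_rank()
    return world_size, rank
```

The recipe would then store only `self.local_size` and `self.local_rank` rather than the whole mesh.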
Looks awesome! Just need to make sure the merge changes are correct.
recipes/full_finetune_distributed.py
utils.log_rank_zero(
    log,
-   "FSDP is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
+   "Distributed training(FSDP and TP) is enabled. Instantiating model and loading checkpoint on Rank 0 ...",
nit: we don't really know if TP and FSDP are both enabled at this point. It looks like TP is only for 70B models right now, so this could be confusing to users.
Context
What is the purpose of this PR?
Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
Enabled TP training on top of FSDP. This is both from user request #2280 and also to improve memory, so that we can get rid of `fsdp_cpu_offload` when training larger models. Note: `optimizer.fused` needs to be False.
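For context, here is a minimal, self-contained sketch of how TP composes with FSDP on a 2D device mesh using the public torch.distributed APIs. This is not the recipe's actual code: the toy module, dimensions, mesh shape, and TP plan are made up, and it assumes 8 GPUs launched via `torchrun --nproc_per_node=8`.

```python
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard  # torch >= 2.6; older releases: torch.distributed._composable.fsdp
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class FeedForward(torch.nn.Module):
    """Toy stand-in for a transformer block, just to have something to shard."""

    def __init__(self, dim: int = 256, hidden: int = 512):
        super().__init__()
        self.w1 = torch.nn.Linear(dim, hidden, bias=False)
        self.w2 = torch.nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.w2(torch.nn.functional.relu(self.w1(x)))


torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

# 2D mesh: outer "dp" dim for FSDP sharding, inner "tp" dim for tensor parallelism.
# With 8 GPUs and a TP degree of 2, this gives dp=4 x tp=2 (as in the test plan below).
device_mesh = dist.init_device_mesh("cuda", (4, 2), mesh_dim_names=("dp", "tp"))

model = FeedForward().cuda()

# 1) Apply the TP plan over the "tp" sub-mesh (colwise then rowwise keeps the
#    intermediate activation sharded and only all-reduces the final output).
parallelize_module(
    model, device_mesh["tp"], {"w1": ColwiseParallel(), "w2": RowwiseParallel()}
)

# 2) Then shard the (already TP-sharded) parameters with FSDP over the "dp" sub-mesh.
fully_shard(model, mesh=device_mesh["dp"])
```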
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these, just ask and we will happily help. We also have a contributing page for some guidance on contributing.
- run pre-commit hooks and linters (make sure you've first installed them via `pre-commit install`)
- run unit tests via `pytest tests`
- run recipe tests via `pytest tests -m integration_test`
Running TP = 2, FSDP = 4:




Running TP = 1, FSDP = 8:
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.